Software Defined Networks based Smart Grid Communication: A Comprehensive Survey
The current power grid is no longer a feasible solution due to
ever-increasing user demand for electricity, aging infrastructure, and
reliability issues, and thus requires transformation to a better grid,
a.k.a. the smart grid (SG). The key features that distinguish the SG from
the conventional electrical power grid are its capability to perform
two-way communication, demand-side management, and real-time pricing.
Despite all the advantages that the SG will bring, there are certain issues
specific to the SG communication system. For instance, network management
of current SG systems is complex, time consuming, and done manually.
Moreover, the SG communication (SGC) system is built on different
vendor-specific devices and protocols. Therefore, current SG systems are
not protocol independent, leading to interoperability issues.
Software defined network (SDN) has been proposed to monitor and manage the
communication networks globally. This article serves as a comprehensive survey
on SDN-based SGC. In this article, we first discuss a taxonomy of the
advantages of SDN-based SGC. We then discuss SDN-based SGC architectures,
along with case studies. Our article provides an in-depth discussion of
routing schemes for SDN-based SGC. We also provide a detailed survey of
security and privacy schemes applied to SDN-based SGC. We furthermore
present challenges, open issues, and future research directions related to
SDN-based SGC.
Graph-based Heuristic Solution for Placing Distributed Video Processing Applications on Moving Vehicle Clusters
Vehicular fog computing (VFC) is envisioned as an extension of cloud and mobile edge computing to utilize the rich sensing and processing resources available in vehicles. We focus on slow-moving cars that spend a significant time in urban traffic congestion as a potential pool of onboard sensors, video cameras, and processing capacity. To leverage the dynamic network and processing resources, we utilize a stochastic mobility model to select nodes with similar mobility patterns. We then design two distributed applications that are scaled in real time and placed as multiple instances on selected vehicular fog nodes. We handle the unstable vehicular environment by (a) using real vehicle density data to build a realistic mobility model that helps in selecting nodes for service deployment; (b) using community-detection algorithms to select a robust vehicular cluster based on the predicted mobility behaviour of vehicles, validating the stability of the chosen cluster using a graph centrality measure; and (c) developing graph-based placement heuristics to find the optimal placement of service graphs based on a multi-objective constrained optimization problem with the objective of efficient resource utilization. The heuristic solves an important problem in processing data generated from distributed devices by balancing the trade-off between increasing the number of service instances, so that there is enough redundancy of processing instances to improve service resilience in case of node or link failure, and reducing their number to minimize resource usage. We compare our heuristic to a mixed integer program (MIP) solution and a first-fit heuristic. Our approach performs better than these comparable schemes in terms of resource utilization and/or has a lower service latency when compared to an edge computing-based service placement scheme.
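The redundancy-versus-resource trade-off at the heart of the heuristic can be illustrated with a toy scoring function. This is a minimal sketch under assumed independent node failures; the function names, the failure model, and the cost weight `alpha` are illustrative assumptions, not the paper's actual heuristic.

```python
# Hypothetical sketch of the redundancy/resource trade-off: pick the number
# of service instances k that balances resilience against resource usage.
# The scoring function and its parameters are illustrative, not the paper's.

def placement_score(k, node_failure_prob, instance_cost, alpha=1.0):
    """Score k replicas: availability benefit minus weighted resource cost.

    With independent node failures, the probability that at least one
    instance survives is 1 - p^k; alpha weights the resource penalty.
    """
    availability = 1.0 - node_failure_prob ** k
    return availability - alpha * instance_cost * k

def choose_instances(max_k, node_failure_prob, instance_cost, alpha=0.05):
    """Return the instance count in 1..max_k with the best score."""
    return max(range(1, max_k + 1),
               key=lambda k: placement_score(k, node_failure_prob,
                                             instance_cost, alpha))

if __name__ == "__main__":
    # With a 20% node failure probability, a couple of replicas suffice
    # before the resource penalty starts to dominate.
    print(choose_instances(max_k=8, node_failure_prob=0.2, instance_cost=1.0))
```

In this toy setting the score peaks quickly: extra replicas buy sharply diminishing availability while the resource cost grows linearly, which is exactly the tension the heuristic navigates.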
Scaling and Placing Distributed Services on Vehicle Clusters in Urban Environments
Many vehicles spend a significant amount of time in urban traffic congestion. Due to the evolution of autonomous vehicles, driver assistance systems, and in-vehicle entertainment, these vehicles have plentiful computational and communication capacity. How can we deploy data collection and processing tasks on these (slowly) moving vehicles to productively use any spare resources? To answer this question, we study the efficient placement of distributed services on a moving vehicle cluster. We present a macroscopic flow model for an intersection in Dublin, Ireland, using real vehicle density data. We show that such aggregate flows are highly predictable (even though the paths of individual vehicles are not known in advance), making it viable to deploy services harnessing vehicles’ sensing capabilities. After studying the feasibility of using these vehicle clusters as infrastructure, we introduce a detailed mathematical specification for a task-based, distributed service placement model. The distributed service scales according to the resource requirements and is robust to the changes caused by the mobility of the cluster. We formulate this as a constrained optimization problem, with the objective of minimizing overall processing and communication costs. Our results show that jointly scaling tasks and finding a mobility-aware, optimal placement results in reduced processing and communication costs compared to two schemes from the literature. We compare our approach to an autonomous vehicular edge computing-based naive solution and a clustering-based solution.
An Algorithm for Two-Phase Rating of Dynamically Composed Services
Many computer science researchers are pursuing the vision of service-oriented software architectures through which end-users can seamlessly access customised, potentially disposable services to aid them in carrying out a myriad of everyday tasks. Full realisation of this vision requires deployment of facilities for the dynamic discovery, composition, interoperation and execution monitoring of preexisting networked software services, possibly administered by different organisations or originated by multiple developers. Significant research efforts are addressing the development of frameworks and process-oriented techniques for service composition, much of it focussing on specification of services in terms of formal process semantics. Although increasingly powerful methodologies, languages and algorithms supporting the construction, execution and adaptation of dynamically composed services have emerged, little attention has been paid to the supporting infrastructure necessary for their widespread deployment. In particular, accounting systems target charging of services on a one-by-one basis; they do not consider the possibility that services can be collectively orchestrated in an arbitrary manner to fulfil changing requirements. Existing accounting systems, including their rating engines, typically are manually configured to account for specific services at the time those services are initially deployed. However, in environments where services can be dynamically composed this approach is no longer possible: service compositions are created and executed within a short time span, so there is no time for manual configuration of appropriate accounting operations. Accounting operations must be automatically configured when service compositions are initially constructed, or subsequently modified. In this paper we present a two-phase rating process incorporating an algorithm that generates charges for dynamically composed services.
The algorithm treats composed services as a tree structure in which groups of services comprise a composed service, which can itself be part of a group of services comprising a composed service at the next level up the tree. The tree is traversed in a depth-first fashion in order to attribute charges to all composed services. To do so, the algorithm calculates changes in charges for those services based on the presence or absence of named services in the group of services comprising that composed service. Application of this algorithm enables the rating engine to generate charges for services that are dynamically composed and of which the rating engine can have no prior knowledge. This method also provides a means to closely map real-world business relationships between service providers onto the charges applied when their services are used together.
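The depth-first rating pass described above can be sketched as a bottom-up traversal of the composition tree. This is a simplified illustration: the `Service` class, the example services, and their charges are hypothetical, and the presence/absence-based charge adjustments of the actual algorithm are reduced here to a plain sum.

```python
# Illustrative sketch of depth-first rating of a service composition tree:
# a composed service's charge is its own base charge plus the charges of
# its sub-services, computed bottom-up. Names and prices are hypothetical.

class Service:
    def __init__(self, name, base_charge=0.0, children=None):
        self.name = name
        self.base_charge = base_charge   # charge for this service itself
        self.children = children or []   # sub-services in the composition

def rate(service):
    """Depth-first traversal: return the total charge for the subtree."""
    return service.base_charge + sum(rate(child) for child in service.children)

# A two-level composition: 'portal' is composed of 'search' and a nested
# 'checkout' service, itself composed of 'payment' and 'shipping'.
checkout = Service("checkout", 1.0,
                   [Service("payment", 2.0), Service("shipping", 1.5)])
portal = Service("portal", 0.5, [Service("search", 0.25), checkout])

print(rate(portal))  # 0.5 + 0.25 + (1.0 + 2.0 + 1.5) = 5.25
```

Because rating recurses before summing, each composed service's charge is available to its parent exactly when needed, which is what lets the engine price compositions it has never seen before.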
Waterford Institute of Technology: Authorship and Data Retention Policy
This document outlines the policies relating to authorship of research papers and other scholarly artefacts and the retention of any data used in their creation.
Strategy and its discontents: the place of strategy in national policymaking
This paper presents a collection of views about the definition, role, purpose and health of strategic policymaking.
Introduction
One of the liveliest debates to have taken place on ASPI’s blog, The Strategist, concerned the place of strategy in Canberra’s policymaking community. It seems that there’s little consensus around what strategy’s core business should be, let alone who should practice it and whether indeed enough strategy is being done by DFAT, Defence or other parts of government.
The 11 short pieces printed here by eight authors with quite diverse perspectives span a broad range of views about the definition, role, purpose and health of strategic policymaking. There’s no more important debate in public policy than on the place of strategy in meeting complex national challenges. This paper will hopefully encourage a more structured debate about strategy’s place at the heart of national policymaking.
Harnessing Models for Policy Conflict Analysis
Policy conflict analysis processes based solely on the examination of
policy language constructs cannot readily discern the semantics associated
with the managed system for which the policies are being defined. However,
by developing analysis processes that can link the constructs of a policy
language to the entities of an information model, we can harness knowledge
relating to relationships and associations, constraint information,
behavioural specifications codified by finite state machines, and extensive
semantic information expressed via ontologies to provide powerful policy
analysis processes.
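The kind of conflict that model-linked analysis can catch, and pure language-level analysis cannot, can be shown with a toy example. Everything below is an invented illustration: the entities, the parent association, and the single conflict rule are assumptions, not the paper's analysis process.

```python
# Hedged sketch: detect a policy conflict that is only visible once policy
# targets are linked to information-model associations. The model, policies
# and conflict rule here are invented for illustration.

# Information model association: each 'interface' entity belongs to a router.
MODEL_PARENT = {"eth0": "router1", "eth1": "router1"}

policies = [
    {"action": "enable", "target": "eth0"},
    {"action": "shutdown", "target": "router1"},  # implicitly affects eth0/eth1
]

def conflicts(policies, parent):
    """Flag pairs where one policy enables an entity whose parent entity
    another policy shuts down. Comparing the two policies' text alone would
    miss this: only the model's association reveals the overlap."""
    found = []
    for p in policies:
        for q in policies:
            if (p["action"] == "enable" and q["action"] == "shutdown"
                    and parent.get(p["target"]) == q["target"]):
                found.append((p["target"], q["target"]))
    return found

print(conflicts(policies, MODEL_PARENT))  # [('eth0', 'router1')]
```

The same pattern extends to the other knowledge sources the abstract lists: constraints, finite-state behavioural specifications and ontologies all enrich the entity graph the conflict rules are evaluated against.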
A Generic Algorithm for Mid-call Audio Codec Switching
We present and evaluate an algorithm that performs
in-call selection of the most appropriate audio codec given
prevailing conditions on the network path between the endpoints
of a voice call. We have studied the behaviour of different
codecs under varying network conditions, in doing so deriving
the impairment factors for non-ITU-T codecs so that the E-model
can be used to assess voice call quality for them. Moreover, we
have studied the drawbacks of codec switching from the end user's
perspective; our switching algorithm seeks to minimise this impact. We have
tested our algorithm on different packages that contain a selection of the
most commonly used codecs: G.711, SILK, iLBC, GSM and Speex. Our results
show that in many typical network scenarios, our mid-call codec switching
algorithm results in better Quality of Experience (QoE) than would have
been achieved had the initial codec been used throughout the call.
An experimental testbed to predict the performance of XACML Policy Decision Points
The performance and scalability of access control
systems are a growing concern as organisations deploy ever more complex communications and content management systems. This paper describes how an (offline) experimental testbed may be used to address performance concerns. To begin, timing measurements are collected from a server component incorporating the Policy Decision Point (PDP) under test, using representative policies and corresponding requests. Our experiments with two XACML PDP implementations show that measured request service times are typically clustered by request type; thus an algorithm for request cluster identification is presented. Cluster characterisations are used as inputs to a PDP performance model for a given policy/request mix, and an analytic (queueing) model is used to estimate the equilibrium server load for different mixes of request clusters. The analytic performance prediction model is validated and extended by discrete event simulation of a PDP subject to additional load. These predictive models enable network administrators to explore the capacity of the PDP for different overall loadings (requests per unit time) and profiles (relative frequencies) of requests.
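The last step of that workflow, turning per-cluster service times and a request mix into a load estimate, can be sketched with a basic queueing approximation. This is a minimal illustration assuming a single M/M/1 server; the cluster names, numbers, and the choice of M/M/1 (rather than the paper's specific analytic model) are assumptions.

```python
# Hedged sketch: estimate PDP utilisation and mean response time from
# per-cluster mean service times and a request mix, using a simple M/M/1
# approximation. All numbers below are illustrative.

def mm1_estimate(arrival_rate, mix, mean_service_s):
    """mix: {cluster: fraction of requests}; mean_service_s: {cluster: seconds}.

    Returns (utilisation, mean response time in seconds) for an M/M/1
    server whose mean service time is the mix-weighted average.
    """
    mean_s = sum(mix[c] * mean_service_s[c] for c in mix)
    rho = arrival_rate * mean_s
    if rho >= 1.0:
        return rho, float("inf")   # overloaded: the queue grows without bound
    w = mean_s / (1.0 - rho)       # M/M/1 mean response (sojourn) time
    return rho, w

# Two request clusters: cheap attribute lookups and costly policy walks.
rho, w = mm1_estimate(arrival_rate=50.0,
                      mix={"cheap": 0.8, "costly": 0.2},
                      mean_service_s={"cheap": 0.005, "costly": 0.030})
print(rho, w)  # 0.5 utilisation, 0.02 s mean response time
```

Sweeping `arrival_rate` or the mix fractions reproduces, in miniature, the capacity exploration the abstract describes: the administrator can see where a given policy/request profile pushes the PDP toward saturation.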
Context-awareness and the smart grid: Requirements and challenges
New intelligent power grids (smart grids) will be an essential way of improving efficiency in power supply and power consumption, facilitating the use of distributed and renewable resources on the supply side and providing consumers with a range of tailored services on the consumption side. The delivery of efficiencies and advanced services in a smart grid will require both a comprehensive overlay communications network and flexible software platforms that can process data from a variety of sources, especially electronic sensor networks. Parallel developments in autonomic systems, pervasive computing and context-awareness (relating in particular to data fusion, context modelling, and semantic data) could provide key elements in the development of scalable smart grid data management systems and applications that utilise a multi-technology communications network. This paper describes: (1) the communications and data management requirements of the emerging smart grid, (2) state-of-the-art techniques and systems for context-awareness and (3) a future direction towards devising a context-aware middleware platform for the smart grid, as well as associated requirements and challenges.